perm filename CONCEP[E76,JMC]1 blob
sn#227654 filedate 1976-07-28 generic text, type C, neo UTF8
.require "memo.pub[let,jmc]" source;
.every heading (,draft,)
.once center
draft
.skip 20
.cb FIRST ORDER THEORIES OF INDIVIDUAL CONCEPTS AND PROPOSITIONS
Abstract: We discuss first order theories in which ⊗individual
⊗concepts are admitted as mathematical objects along with the things
that ⊗reify them. This allows very straightforward formalizations of
knowledge, belief, wanting, and necessity in ordinary first order
logic without modal operators. Applications are given in philosophy
and in artificial intelligence.
%7This draft of CONCEP[S76,JMC] PUBbed at {time} on {date}.%1
.skip to column 1
.bb INTRODUCTION
%2"...it seems that hardly anybody proposes to use different variables
for propositions and for truth-values, or different variables for
individuals and individual concepts."%1 - (Carnap 1956, p. 113).
Admitting individual concepts as objects - with
concept-valued constants, variables, functions and expressions -
allows ordinary first order theories of necessity, knowledge, belief
and wanting without modal operators or quotation marks and without
the restrictions on substituting equals for equals that either device
makes necessary.
According to Frege (1892), the meaning of the phrase %2"Mike's
telephone number"%1 in the sentence %2"Pat knows Mike's telephone number"%1
is the concept of Mike's telephone number, whereas its meaning in the
sentence %2"Pat dialed Mike's telephone number"%1 is the number itself.
Thus if we also have %2"Mary's telephone number = Mike's telephone
number"%1, then %2"Pat dialed Mary's telephone number"%1 follows, but
%2"Pat knows Mary's telephone number"%1 does not.
Frege further proposed that a phrase has a ⊗sense
which is a ⊗concept and is its ⊗meaning in ⊗oblique ⊗contexts
like knowing and wanting, and a ⊗denotation which is its
⊗meaning in ⊗direct ⊗contexts.
⊗Denotations are the basis of the semantics of first
order logic and model theory and are well understood, but ⊗sense
has given more trouble, and the modal treatment of oblique
contexts avoids the idea. On the other hand, logicians such as
Carnap (1947 and 1956), Church (1951) and Montague (1974) see a need
for ⊗concepts and have proposed formalizations, but none have been
very satisfactory.
The problem identified by Frege - of suitably limiting the
application of the substitutivity of equals for equals - arises in
artificial intelligence as well as in philosophy and linguistics for
any system that must represent information about beliefs, knowledge,
desires, or logical necessity - regardless of whether the
representation is declarative or procedural (as in PLANNER and other
AI formalisms).
The present idea is to leave the logic unchanged and to treat
concepts as one kind of object in an ordinary first order theory.
We shall have one term that denotes Mike's telephone number
and a different term denoting the concept
of Mike's telephone number instead of having a single term
whose denotation is the number and whose sense is a concept of it.
The relations among concepts and between concepts and other entities
are expressed by formulas of first order logic.
Ordinary model theory can then be used to study what spaces of
concepts satisfy various sets of axioms.
We treat primarily what Carnap calls ⊗individual ⊗concepts
like %2Mike's telephone number%1 or ⊗Pegasus and not general concepts
like ⊗telephone or ⊗unicorn. Extension to general concepts seems
feasible, but individual concepts provide enough food for thought for
the present.
It seems surprising that such a straightforward and easy
approach should be new.
.skip 1
.BB KNOWING WHAT AND KNOWING THAT
To assert that Pat knows Mike's telephone
number we write
!!e1: %2true Know(Pat,Telephone Mike)%1
with the following conventions:
.item←0;
#. Parentheses are often omitted for one argument functions and
predicates. This purely syntactic convention is not
important. Another convention is to capitalize the first letter of
a constant, variable or function name when its value is a concept.
(We considered also capitalizing the last letter when the arguments are concepts,
but it made the formulas ugly).
#. ⊗Mike is the concept of Mike; i.e. it is the ⊗sense of
the expression %2"Mike"%1. ⊗mike stands for Mike
himself.
#. ⊗Telephone is a function that takes a concept of a person
into a concept of his telephone number. We will also use ⊗telephone
which takes the person himself into the telephone number itself.
Whether the function ⊗Telephone can be identified with the general
concept of a person's telephone number is not settled. For the present,
please suppose not.
#. If ⊗P is a person concept and ⊗X is another concept, then
⊗Know(P,X) is an assertion concept or ⊗proposition meaning that ⊗P
knows the value of ⊗X. In ({eq e1}), therefore, %2Know(Pat,Telephone
Mike)%1 is a proposition and not a truth value. Note that we are
formalizing ⊗knowing ⊗what rather than ⊗knowing ⊗that or ⊗knowing
⊗how. For AI and for other practical purposes, ⊗knowing ⊗what seems
to be the most useful notion of the three.
In English, %2knowing what%1 is written %2knowing whether%1 when the
"knowand" is a proposition.
#. ⊗true(Q) is the truth value, ⊗t or ⊗f, of the proposition ⊗Q,
and we must write ⊗true(Q) in order to assert ⊗Q. Later
we will consider formalisms in which ⊗true has a second argument - a
⊗situation, a ⊗story, a ⊗possible ⊗world, or even a %2partial
possible world%1 (a notion we hope to introduce).
#. The formulas are in a sorted first order logic with
functions and equality. Knowledge, necessity, etc. will be discussed
without extending the logic in any way - solely by the introduction
of predicate and function symbols subject to suitable axioms. In the
present informal treatment, we will not be explicit about sorts, but
we will try to be typographically consistent.
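As an illustration only (not part of the paper's formalism), the conventions above can be sketched in Python: concepts are encoded as tagged tuples so that %2Know(Pat,Telephone Mike)%1 is an object distinct from its truth value, and ⊗true is a separate predicate on propositions. All names here are hypothetical choices for the sketch.

```python
# Sketch: concepts as immutable tagged tuples, so that
# Know(Pat, Telephone(Mike)) is an object, not a truth value.

def Telephone(P):          # concept of P's telephone number
    return ("Telephone", P)

def Know(P, X):            # proposition: P knows the value of X
    return ("Know", P, X)

Mike, Mary, Pat = ("Person", "Mike"), ("Person", "Mary"), ("Person", "Pat")

# 'true' is a predicate on propositions, modeled here by a finite set of facts.
FACTS = {Know(Pat, Telephone(Mike))}

def true(Q):
    return Q in FACTS

# Asserting (e1): Pat knows Mike's telephone number.
assert true(Know(Pat, Telephone(Mike)))
# Even if Mary's and Mike's numbers are equal as things, the concepts differ,
# so nothing forces Pat to know Mary's number:
assert Telephone(Mary) != Telephone(Mike)
assert not true(Know(Pat, Telephone(Mary)))
```

The point of the encoding is that substitution of equal denotations never happens automatically: distinct concept objects stay distinct.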
The reader may be nervous about what is meant by ⊗concept.
He will have to remain nervous; no final commitment will be made
in this paper. The formalism is compatible with a variety
of possibilities, and these can be compared using the
models of their first order theories.
However, if ({eq e1}) is to be reasonable, it must not
follow from ({eq e1}) and the fact that Mary's telephone number is the
same as Mike's, that Pat knows Mary's telephone number.
The proposition that Joe knows ⊗whether Pat knows Mike's telephone number
is written
!!ee1: %2Know(Joe,Know(Pat,Telephone Mike))%1,
and asserting it requires writing
!!ee4: %2true Know(Joe,Know(Pat,Telephone Mike))%1,
while the proposition that Joe knows ⊗that Pat knows Mike's telephone number
is written
!!ee2: %2K(Joe,Know(Pat,Telephone Mike))%1,
where ⊗K(P,Q) is the proposition that ⊗P knows ⊗that ⊗Q. While English is
logically not uniform in that knowing an individual concept means
knowing its value while knowing a proposition means knowing that it
has a particular value, namely ⊗t, there is no reason to make robots
with this infirmity.
We first consider systems in which corresponding to each concept ⊗X,
there is a thing ⊗x of which ⊗X is a concept. Then there is a function
⊗denot such that
!!f2: %2x = denot X%1.
Functions like ⊗Telephone are then related to ⊗denot by equations like
!!e2: %2∀P1 P2.(denot P1 = denot P2 ⊃ denot Telephone P1 = denot Telephone P2)%1.
We call %2denot X%1 the ⊗denotation of the concept ⊗X, and ({eq e2})
asserts that the denotation of the concept of %2P%1's telephone
number depends only on the denotation of the concept %2P%1.
The variables in ({eq e2}) range over concepts of persons, and
we regard ({eq e2}) as asserting
that ⊗Telephone is ⊗extensional with respect to ⊗denot.
Note that our ⊗denot operates on concepts rather than on expressions;
a theory of expressions will also need a denotation function.
From ({eq e2}) and suitable logical axioms
follows the existence of a function ⊗telephone satisfying
!!e2a: %2∀P.(denot Telephone P = telephone denot P)%1.
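A minimal sketch of (e2) and (e2a), again with hypothetical names: ⊗denot is a finite dictionary from concept objects to things, and the induced function ⊗telephone on things is used to extend ⊗denot so that the extensionality equation holds by construction.

```python
# Sketch: 'denot' maps concepts (tagged tuples) to things.
Mike, Mary = ("Person", "Mike"), ("Person", "Mary")

def Telephone(P):
    return ("Telephone", P)

denot = {Mike: "mike", Mary: "mary"}

# telephone: the induced function on things (cf. e2a); the shared
# number is an assumption of the example.
telephone = {"mike": "333-3333", "mary": "333-3333"}

# Extend denot so that denot(Telephone P) = telephone(denot P):
for P in (Mike, Mary):
    denot[Telephone(P)] = telephone[denot[P]]

# The denotations of the two telephone concepts coincide ...
assert denot[Telephone(Mike)] == denot[Telephone(Mary)]
# ... while the concepts themselves remain distinct objects.
assert Telephone(Mike) != Telephone(Mary)
```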
⊗Know is extensional with respect to ⊗denot in its first argument,
and this is expressed by
!!e3: %2∀P1 P2 X.(denot P1 = denot P2 ⊃ denot Know(P1,X) = denot Know(P2,X))%1,
but it is not extensional in its second argument. We can therefore define
a predicate ⊗know(p,X) satisfying
!!e3a: %2∀P X.(true Know(P,X) ≡ know(denot P,X))%1.
(Note that all these predicates and functions are entirely extensional
in the underlying logic, and the notion of extensionality presented
here is relative to ⊗denot.)
The predicate ⊗true and the function ⊗denot are related by
!!e4: %2∀Q.(true Q ≡ (denot Q = t))%1
provided truth values are in the range of ⊗denot, and ⊗denot could
also be provided with a %2(partial) possible world%1 argument.
When we don't assume that all concepts have denotations,
we use a predicate ⊗denotes(X,x) instead of a function.
The extensionality of ⊗Telephone would then be written
!!e41: %2∀P1 P2 x u.(denotes(P1,x)∧denotes(P2,x)∧denotes(Telephone P1,u) ⊃
denotes(Telephone P2,u))%1.
We now introduce the function ⊗Exists satisfying
!!e42: %2∀X.(true Exists X ≡ ∃x.denotes(X,x))%1.
Suppose we want to assert that Pegasus is a horse without asserting
that Pegasus exists. We can do this by introducing the predicate
⊗Ishorse and writing
!!e43: %2true Ishorse Pegasus%1
which is related to the predicate ⊗ishorse by
!!e44: %2∀X x.(denotes(X,x) ⊃ (ishorse x ≡ true Ishorse X))%1.
In this way, we assert extensionality without assuming that all
concepts have denotations. ⊗Exists is extensional in this sense, but
the corresponding predicate ⊗exists is identically true and therefore
dispensable.
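The Pegasus example (e42)-(e44) can be sketched with a ⊗partial denotation relation, here simply a dictionary in which ⊗Pegasus has no entry; the names are hypothetical.

```python
# Sketch: a partial denotation relation, so concepts may lack denotations.
Pegasus, Mike = ("Concept", "Pegasus"), ("Concept", "Mike")

denotations = {Mike: "mike"}          # Pegasus denotes nothing

def Exists(X):                        # proposition that X has a denotation
    return ("Exists", X)

def Ishorse(X):
    return ("Ishorse", X)

TRUE_PROPS = {Ishorse(Pegasus)}       # asserted directly, cf. (e43)

def true(Q):
    if Q[0] == "Exists":              # (e42): Exists X is true iff X denotes
        return Q[1] in denotations
    return Q in TRUE_PROPS

assert true(Ishorse(Pegasus))         # Pegasus is a horse ...
assert not true(Exists(Pegasus))      # ... without existing
assert true(Exists(Mike))
```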
In order to combine concepts propositionally, we need analogs
of the propositional operators such as ⊗And, which we shall write as an infix
and axiomatize by
!!e5: %2∀Q1 Q2.(true(Q1 And Q2) ≡ true Q1 ∧ true Q2)%1.
The corresponding formulas for ⊗Or, ⊗Not, ⊗Implies, and
⊗Equiv are
!!e5x: %2∀Q1 Q2.(true(Q1 Or Q2) ≡ true Q1 ∨ true Q2)%1,
!!e5y: %2∀Q.(true(Not Q) ≡ ¬ true Q)%1,
!!e5w: %2∀Q1 Q2.(true(Q1 Implies Q2) ≡ true Q1 ⊃ true Q2)%1,
and
!!e5z: %2∀Q1 Q2.(true(Q1 Equiv Q2) ≡ (true Q1 ≡ true Q2))%1.
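The connective axioms (e5)-(e5z) can be realized by construction if ⊗true is defined recursively over the proposition-forming functions, as in this sketch (atomic truth values are an assumption of the example):

```python
# Sketch: propositional connectives build new propositions; 'true' is
# defined recursively so that (e5)-(e5z) hold by construction.
def And(Q1, Q2): return ("And", Q1, Q2)
def Or(Q1, Q2):  return ("Or", Q1, Q2)
def Not(Q):      return ("Not", Q)

ATOMIC_TRUTHS = {("P",): True, ("Q",): False}

def true(Q):
    tag = Q[0]
    if tag == "And": return true(Q[1]) and true(Q[2])
    if tag == "Or":  return true(Q[1]) or true(Q[2])
    if tag == "Not": return not true(Q[1])
    return ATOMIC_TRUTHS[Q]

P, Q = ("P",), ("Q",)
assert true(And(P, Not(Q)))
assert true(Or(Q, P))
# The connectives yield propositions, not truth values:
assert And(P, Q) != (true(P) and true(Q))
```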
The equality symbol "=" is part of the logic
so that %2X = Y%1 asserts that ⊗X and ⊗Y are the same concept.
To write propositions expressing equality, we introduce ⊗Equal(X,Y)
which is a proposition that ⊗X and ⊗Y denote the same
thing if anything. We shall want axioms
!!g1: %2∀X.true Equal(X,X)%1,
!!g2: %2∀X Y.(true Equal(X,Y) ≡ true Equal(Y,X))%1,
and
!!g3: %2∀X Y Z.(true Equal(X,Y) ∧ true Equal(Y,Z) ⊃ true Equal(X,Z))%1
making %2true Equal(X,Y)%1 an equivalence relation, and
!!g4: %2∀X Y x.(true Equal(X,Y) ∧ denotes(X,x) ⊃ denotes(Y,x))%1
which relates it to equality in the logic.
We can make the concept of equality ⊗essentially symmetric by
replacing ({eq g2}) by
!!g2a: %2∀X Y.Equal(X,Y) = Equal(Y,X)%1,
i.e. making the two expressions denote the %2same concept%1.
The statement that Mary has the same telephone as Mike is asserted by
!!g5: %2true Equal(Telephone Mary,Telephone Mike)%1,
and it obviously doesn't follow from this and ({eq e1}) that
!!g6: %2true Know(Pat,Telephone Mary)%1.
To draw this conclusion we need something like
!!g7: %2true K(Pat,Equal(Telephone Mary,Telephone Mike))%1
and suitable axioms about knowledge.
If we were to adopt the convention that a proposition
appearing at the outer level of a sentence is asserted and were to
regard the denotation-valued function as standing for the sense-valued
function when it appears as the second argument of ⊗Know, we would have
a notation that resembles ordinary language in
handling obliquity entirely by context. There is no guarantee that
general statements could be expressed unambiguously without
circumlocution; the fact that the principles of intensional reasoning
haven't yet been stated is evidence against the suitability of
ordinary language for stating them.
.skip 2
.BB FUNCTIONS FROM THINGS TO CONCEPTS OF THEM
While the relation ⊗denotes(X,x) between concepts and things
is many-one, functions going from things to certain concepts of them
seem useful. Some things such as numbers can be regarded as having
⊗standard concepts. Suppose that ⊗Concept1 ⊗n gives
a standard concept of the number ⊗n, so that
!!f8: %2∀n.(denot Concept1 n = n)%1.
We can then have simultaneously
!!f81: %2true Not Knew(Kepler,Number Planets)%1
and
!!f82: %2true Knew(Kepler,Composite Concept1 denot Number Planets)%1.
(We have used ⊗Knew instead of ⊗Know, because we are not now concerned
with formalizing tense.)
({eq f82}) can be condensed using ⊗Composite1 which takes
a number into the proposition that it is composite, i.e.
!!f83: %2Composite1 n = Composite Concept1 n%1
getting
!!f84: %2true Knew(Kepler,Composite1 denot Number Planets)%1.
A further condensation can be achieved using ⊗Composite2 defined by
!!f85: %2Composite2 N = Composite Concept1 denot N%1,
letting us write
!!f86: %2true Knew(Kepler,Composite2 Number Planets)%1,
which is true even though
!!f87: %2true Knew(Kepler,Composite Number Planets)%1
is false. ({eq f87}) is our formal expression of %2"Kepler knew
that the number of planets is composite"%1, while ({eq f82}), ({eq f84}),
and ({eq f86}) express a proposition that can only be stated
awkwardly in English, perhaps as %2"Kepler knew that a certain
number is composite, where this number (perhaps unbeknownst
to Kepler) is the number of planets"%1.
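The Kepler example (f81)-(f87) can be sketched by taking Kepler's knowledge to be a finite set of proposition objects; that set, and the value 9 for the number of planets, follow the paper's example, while the Python names are hypothetical.

```python
# Sketch of (f82)-(f87): Kepler's knowledge as a finite set of propositions.
def Concept1(n): return ("Concept1", n)        # standard concept of n
def Composite(N): return ("Composite", N)
def Knew(P, Q):  return ("Knew", P, Q)

Kepler = ("Person", "Kepler")
NumberPlanets = ("Number", "Planets")

denot = {NumberPlanets: 9}

def Composite2(N):                             # (f85)
    return Composite(Concept1(denot[N]))

# Kepler knew that 9 is composite, but not under the description
# "the number of planets":
KEPLER_KNEW = {Composite(Concept1(9))}

def true_Knew(P, Q):
    return Q in KEPLER_KNEW

assert true_Knew(Kepler, Composite2(NumberPlanets))      # (f86) holds
assert not true_Knew(Kepler, Composite(NumberPlanets))   # (f87) fails
```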
We may also want a map from things to concepts of them in
order to formalize a sentence like, %2"Lassie knows the location of
all her puppies"%1. We write this
!!f88: %2∀x.(ispuppy(x,lassie) ⊃ true Knowd(Lassie,Locationd Conceptd x))%1.
Here ⊗Conceptd takes a puppy into a dog's concept of it, and ⊗Locationd
takes a dog's concept of a puppy into a dog's concept of its location.
The axioms satisfied by ⊗Knowd, ⊗Locationd and ⊗Conceptd can be tailored
to our ideas of what dogs know.
A suitable collection of functions from things to concepts
might permit a language that omitted some individual concepts like
⊗Mike (replacing it with %2Conceptx mike%1) and wrote many sentences
with quantifiers over things rather than over concepts. However, it
is still premature to apply Occam's razor.
.BB RELATIONS BETWEEN KNOWING WHAT AND KNOWING THAT
As mentioned before, %2"Pat knows Mike's telephone number"%1
is written
!!e9: %2true Know(Pat,Telephone Mike)%1.
We can write %2"Pat knows Mike's telephone number is 333-3333"%1 as
!!e10: %2true K(Pat,Equal(Telephone Mike,Concept1 "333-3333"))%1
where ⊗K(P,Q) is the proposition that ⊗denot(P) knows the proposition ⊗Q
and %2Concept1("333-3333")%1 is
some standard concept of that telephone number.
The two ways of expressing knowledge are somewhat interdefinable,
since we can write
!!e11: %2K(P,Q) = (Q And Know(P,Q))%1,
and
!!e12: %2true Know(P,X) ≡ ∃A.(constant A ∧ true K(P,Equal(X,A)))%1.
Here %2constant A%1 asserts that ⊗A is a constant, i.e. a concept
such that we are willing to say that ⊗P knows ⊗X if he knows it
equals ⊗A. This is clear enough for some
domains like integers, but it is not obvious how to treat knowing
a person.
Using the ⊗standard ⊗concept function ⊗Concept1,
we might replace ({eq e12}) by
!!e13: %2true Know(P,X) ≡ ∃a.true K(P,Equal(X,Concept1 a))%1
with similar meaning.
({eq e12}) and ({eq e13}) express a ⊗denotational definition
of ⊗Know in terms of ⊗K. A ⊗conceptual definition seems to require
something like
!!e14: %2∀P X.(Know(P,X) = Exists X And K(P,Equal(X,Concept2 denot X)))%1,
where ⊗Concept2 is a suitable function from things to concepts and may
not be available for all sorts of objects.
.bb "MODAL LOGIC (part 1)"
We will divide our treatment of necessity and possibility into
two parts. In %2unquantified modal logic%1, the arguments of the
modal functions will not involve quantification although quantification
occurs in the logic.
%2Nec Q%1 is the proposition that the proposition ⊗Q is necessary,
and %2Poss Q%1 is the proposition that it is possible. To assert
necessity or possibility we must write %2true Nec Q%1 or %2true Poss Q%1.
This can be abbreviated by defining %2nec Q ≡ true Nec Q%1 and %2poss Q%1
correspondingly, but these are predicates in the logic with ⊗t and ⊗f
as values so that ⊗nec ⊗Q cannot be an argument of ⊗nec or ⊗Nec.
Before we even get to modal logic proper we have a decision to
make - shall %2Not Not Q%1 be considered the same proposition as ⊗Q, or
is it merely extensionally equivalent? The first is written
!!e41a: %2∀Q. Not Not Q = Q%1,
and the second
!!F42: %2 ∀Q.true Not Not Q ≡ true Q%1.
The second follows from the first by substitution of equals for equals.
In %2Meaning and Necessity%1, Carnap takes
what amounts to the first alternative,
regarding concepts as L-equivalence classes of expressions. This works
nicely for discussing necessity, but when he wants to discuss knowledge
without assuming that everyone knows Fermat's last theorem if it is true,
he introduces the notion of ⊗intensional ⊗isomorphism and has knowledge
operate on the equivalence classes of this relation.
If we choose the first alternative, then we may go on to
identify any two propositions that can be transformed into each other
by Boolean identities. This can be assured by a small collection of
propositional identities like ({eq e41a}) including associative and
distributive laws for conjunction and disjunction, De Morgan's law,
and the laws governing the propositions ⊗T and ⊗F. In the second
alternative we will want the extensional forms of the same laws.
When we get to quantification a similar choice will arise, but if we
choose the first alternative, it will be undecidable whether two
expressions denote the same concept. I doubt that considerations of
linguistic usage or usefulness in AI will unequivocally recommend one
alternative, so both will have to be studied.
Actually there are more than two alternatives.
Let ⊗M be the free algebra
built up from the "atomic" concepts by the concept forming function
symbols. If ≡≡ is an equivalence relation on ⊗M such that
!!F43a: %2∀X1 X2 ε M.((X1 ≡≡ X2) ⊃ (true X1 ≡ true X2))%1,
then the set of equivalence classes under ≡≡ may be taken as the
set of concepts.
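One concrete way to obtain such an equivalence relation ≡≡ is to represent each equivalence class by a normal form. The sketch below (an illustration, not a proposal from the paper) identifies %2Not Not Q%1 with ⊗Q by rewriting, so (e41a) holds for the representatives:

```python
# Sketch: propositions as tagged tuples; normalization removes double
# negations, so equivalence classes are represented by normal forms.
def Not(Q): return ("Not", Q)
def And(Q1, Q2): return ("And", Q1, Q2)

def normalize(Q):
    if Q[0] == "Not":
        body = normalize(Q[1])
        if body[0] == "Not":        # Not Not Q = Q, cf. (e41a)
            return body[1]
        return ("Not", body)
    if Q[0] == "And":
        return ("And", normalize(Q[1]), normalize(Q[2]))
    return Q

def same_concept(Q1, Q2):           # the equivalence relation ≡≡
    return normalize(Q1) == normalize(Q2)

P = ("P",)
assert same_concept(Not(Not(P)), P)
assert same_concept(And(Not(Not(P)), P), And(P, P))
assert not same_concept(Not(P), P)
```

Adding further identities (associativity, De Morgan's law, and so on) to the normalizer moves the representation toward Carnap's coarser equivalence classes.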
Similar possibilities arise in modal logic. We can choose
between the ⊗conceptual ⊗identity
!!F43: %2∀Q.(Poss Q = Not Nec Not Q)%1,
and the weaker extensional axiom
!!F44: %2∀Q.(true Poss Q ≡ true Not Nec Not Q)%1.
We will write the rest of our modal axioms in extensional form.
We have
!!e45: %2∀Q.(true Nec Q ⊃ true Q)%1,
and
!!e46: %2∀Q1 Q2.(true Nec Q1 ∧ true Nec(Q1 Implies Q2) ⊃ true Nec Q2)%1,
yielding a system equivalent to von Wright's T.
S4 is given by
!!e47: %2∀Q.(true Nec Q ≡ true Nec Nec Q)%1,
and S5 by
!!e48: %2∀Q.(true Poss Q ≡ true Nec Poss Q)%1.
Actually, there may be no need to commit ourselves to a particular
modal system. We can simultaneously have the functions ⊗NecT, ⊗Nec4 and ⊗Nec5,
related by axioms such as
!!e49: %2∀Q.(true Nec4 Q ⊃ true Nec5 Q)%1
which would seem plausible if we regard S4 as corresponding to provability
in some system and S5 as truth in the intended model of the system.
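The modal axioms above can be checked mechanically in a finite Kripke-style model (accessibility-relation semantics is an assumption of this sketch; the paper itself leaves the semantics of ⊗Nec open). Here reflexivity of the relation makes the T axiom (e45) hold at every world:

```python
# Sketch: verifying the T axiom (e45) in a finite Kripke-style model.
WORLDS = {0, 1, 2}
ACCESS = {(0, 0), (1, 1), (2, 2), (0, 1), (1, 2)}   # reflexive relation

def Nec(Q): return ("Nec", Q)

VALUATION = {("P",): {0, 1, 2}, ("Q",): {0, 2}}      # worlds where atoms hold

def true_at(Q, w):
    if Q[0] == "Nec":                # Nec Q holds iff Q holds at all successors
        return all(true_at(Q[1], v) for (u, v) in ACCESS if u == w)
    return w in VALUATION[Q]

P, Q = ("P",), ("Q",)
# (e45): 'true Nec Q ⊃ true Q' at every world, since ACCESS is reflexive.
for w in WORLDS:
    assert (not true_at(Nec(Q), w)) or true_at(Q, w)
assert true_at(Nec(P), 0)
```

Making the relation transitive or an equivalence relation would similarly validate the S4 axiom (e47) or the S5 axiom (e48).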
Presumably we shall want to relate necessity and equality by
the axiom
!!e50: %2∀X.true Nec Equal(X,X)%1.
Certain of Carnap's proposals translate to the stronger relation
!!e51: %2∀X Y.(X=Y ≡ true Nec Equal(X,Y))%1
which asserts that two concepts are the same if and only if the equality
of what they may denote is necessary.
.bb MORE PHILOSOPHICAL EXAMPLES - MOSTLY WELL KNOWN
Some sentences that recur as examples in the philosophical literature
will be expressed in our notation so the treatments can be
compared.
First we have %2"The number of planets = 9"%1 and %2"Necessarily
9 = 9"%1 from which one doesn't want to deduce %2"Necessarily the number
of planets = 9"%1. This example is discussed by Quine (1961) and Kaplan (1969).
Consider the sentences
!!e20: %2¬nec Equal(Number Planets, Concept1 9)%1
and
!!e21: %2nec Equal(Concept1 number planets,Concept1 9)%1.
Both are true. ({eq e20}) asserts that it is not necessary that the
number of planets be 9, and ({eq e21}) asserts that the number of
planets, once determined, is a number that is necessarily equal to 9.
It is a major virtue of our formalism that both meanings can be
expressed and are readily distinguished.
Substitutivity of equals holds in the logic but causes no
trouble, because %2"The number of planets = 9"%1
may be written
!!e22: %2number(planets) = 9%1
or, using concepts, as
!!e23: %2true Equal(Number Planets, Concept1 9)%1,
and %2"Necessarily 9=9"%1 is written
!!e24: %2nec Equal(Concept1 9,Concept1 9)%1,
and these don't yield the unwanted conclusion.
Ryle used the sentences %2"Baldwin is a statesman"%1 and
%2"Pickwick is a fiction"%1 to illustrate that parallel sentence
construction does not always give parallel sense.
The first can be rendered in four ways, namely
%2true Statesman Baldwin%1 or %2statesman denot Baldwin%1
or %2statesman baldwin%1 or %2statesman1 Baldwin%1 where the last
asserts that the concept of Baldwin is one of a statesman. The
second can be rendered only as
%2true Fiction Pickwick%1 or %2fiction1 Pickwick%1.
Quine (1961) considers illegitimate the sentence
!!e26: %2(∃x)(Philip is unaware that x denounced Catiline)%1
obtained from %2"Philip is unaware that Tully denounced Catiline"%1
by existential generalization. In the example, we are also
supposing the truth of %2"Philip is aware that Cicero denounced
Catiline"%1.
These sentences are related to (perhaps even explicated by) several
sentences in our system. ⊗Tully and ⊗Cicero are taken as distinct
concepts. The person is called ⊗tully or ⊗cicero in our language,
and we have
!!e27: %2tully = cicero%1,
!!e28: %2denot Tully = cicero%1
and
!!e29: %2denot Cicero = cicero%1.
We can discuss Philip's concept of the person Tully by introducing
a function ⊗Concept2(p1,p2) giving for some persons ⊗p1 and ⊗p2, %2p1%1's
concept of ⊗p2. Such a function need not be unique or always defined,
but in the present case, some of our information may be conveniently
expressed by
!!e30: %2Concept2(philip,tully) = Cicero%1,
asserting that Philip's concept of the person Cicero is ⊗Cicero.
The basic assumptions of Quine's example also include
!!e31: %2true K(Philip,Denounced(Cicero,Catiline))%1
and
!!e32: %2¬true K(Philip,Denounced(Tully,Catiline))%1.
From ({eq e28}), ... ,({eq e32}) we can deduce
!!e33: %2∃P.true Denounced(P,Catiline) And Not K(Philip,Denounced(P,Catiline))%1,
from ({eq e32}) and others, and
!!e34: %2¬∃p.(denounced(p,catiline) ∧ ¬true K(Philip,Denounced(Concept2(philip,p),
Catiline)))%1
using the additional hypotheses
!!e35: %2∀p.(denounced(p,catiline) ⊃ p = cicero)%1,
!!e36: %2denot Catiline = catiline%1,
and
!!e37: %2∀P1 P2.(denot Denounced(P1,P2) ≡ denounced(denot P1,denot P2))%1.
Presumably ({eq e33}) is always true, because we can always construct
a concept whose denotation is Cicero unbeknownst to Philip. The truth
of ({eq e34}) depends on Philip's knowing that someone denounced Catiline
and on the map ⊗Concept2(p1,p2) that gives one person's concept of another.
If we refrain from using a silly map that gives something like
⊗Denouncer(Catiline) as its value, we can get results that correspond
to intuition.
The following sentence attributed to Russell is discussed
by Kaplan: %2"I thought that your yacht was longer than it is"%1. We
can write it
!!e25: %2true Believed(I,Greater(Length Youryacht,Concept1 denot
Length Youryacht))%1
where we are not analyzing the pronouns or the tense, but are using
⊗denot to get the actual length of the yacht and ⊗Concept1 to get back
a concept of this true length so as to end up with a proposition
that the length of the yacht is greater than that number.
This looks problematical, but if it is consistent, it is probably useful.
In order to express %2"Your yacht is longer than Peter thinks
it is."%1, we need the expression %2Denot(Peter,X)%1 giving a concept
of what Peter thinks the value of ⊗X is. We now write
!!e25b: %2longer(youryacht,denot Denot(Peter,Length Youryacht))%1,
but I am not certain this is a correct translation.
.bb QUANTIFICATION
As the examples of the previous sections have shown,
admitting concepts as objects and introducing standard concept
functions makes "quantifying in" rather easy. However, forming
propositions and individual concepts by quantification requires new
ideas and additional formalism.
We want to continue describing concepts within first order
logic with no logical extensions. Therefore, in order to form
new concepts by quantification and description, we introduce
functions ⊗All, ⊗Exist, and ⊗The such that ⊗All(V,P) is
(approximately) the proposition that %2for all values of V
P is true%1, ⊗Exist(V,P) is the corresponding existential
proposition, and ⊗The(V,P) is the concept of %2the V such that P%1.
Since ⊗All is to be a function, ⊗V and ⊗P must be objects in the logic.
However, ⊗V is semantically a variable in the formation of ⊗All(V,P),
etc., and we will call such objects %2inner variables%1
so as to distinguish them from variables in the logic.
We will use ⊗V, sometimes with subscripts, for a logical variable
ranging over inner variables. We also need some constant symbols for
inner variables (got that?), and we will use doubled letters,
sometimes with subscripts, for these. ⊗XX will be used for
individual concepts, ⊗PP for persons, and ⊗QQ for propositions.
The second argument of ⊗All and friends is a "proposition
with variables in it", but remember that these variables are inner
variables which are constants in the logic. Got that? We won't
introduce a special term for them, but will generally allow concepts
to include inner variables. Thus concepts now include inner
variables like ⊗XX and ⊗PP, and concept forming functions like
⊗Telephone and ⊗Know take the generalized concepts as arguments.
Thus
!!h6: %2Child(Mike,PP) Implies Equal(Telephone PP,Telephone Mike)%1
is a proposition with the inner variable ⊗PP in it to the effect that
if ⊗PP is a child of Mike, then his telephone number is the same as
Mike's, and
!!h5: %2All(PP,Child(Mike,PP) Implies Equal(Telephone PP,Telephone Mike))%1
is the proposition that all Mike's children have the same telephone
number as Mike. Existential propositions are formed similarly to
universal ones, but the function ⊗Exist introduced here should not be confused
with the function ⊗Exists applied to individual concepts introduced
earlier.
In forming individual concepts by the description function ⊗The,
it doesn't matter whether the object described exists. Thus
!!h7: %2The(PP,Child(Mike,PP))%1
is the concept of Mike's only child. %2Exists The(PP,Child(Mike,PP))%1
is the proposition that the described child exists. We have
!!h7a: %2true Exists The(PP,Child(Mike,PP)) ≡ true(Exist(PP,Child(Mike,PP)
And All(PP1,Child(Mike,PP1) Implies Equal(PP,PP1))))%1,
but we may want the equality of the two propositions, i.e.
!!h7b: %2Exists The(PP,Child(Mike,PP)) = Exist(PP,Child(Mike,PP)
And All(PP1,Child(Mike,PP1) Implies Equal(PP,PP1)))%1.
This is part of the general problem of when two logically equivalent
concepts are to be regarded as the same.
In order to discuss the truth of propositions and the denotation
of descriptions, we introduce ⊗possible ⊗worlds reluctantly and
with an important difference from the usual treatment.
We need them to give values to the
inner variables and we can also use them for axiomatizing the
modal operators, knowledge, belief and tense. However, for axiomatizing
quantification, we also need a function αα such that
!!h8: π' = αα(%2V,x%1,π)
is the possible world that is the same as the world π except that
the inner variable ⊗V has the value ⊗x instead of the value it
has in π. In this respect our possible worlds resemble the ⊗state
⊗vectors or ⊗environments of computer science more than the
possible worlds of the Kripke treatment of modal logic. Later we
will use this Cartesian product structure on the space of possible
worlds to discuss counterfactual conditionals.
Let π0 be the actual world. Let ⊗true(P,π) mean that the
proposition ⊗P is true in the possible world π. Then
!!h9: %2∀P.(true P ≡ true(P,%1π0)).
Let %2denotes(X,x,%1π) mean that ⊗X denotes ⊗x in π, and let ⊗denot(X,π)
mean the denotation of ⊗X in π when that is defined.
The truth condition for ⊗All(V,P) is then given by
!!h1: %2∀π V P.(true(All(V,P),π) ≡ ∀x.true(P,αα(V,x,π)))%1.
Here ⊗V ranges over inner variables, ⊗P ranges over propositions,
and ⊗x ranges over things. There seems to be no harm in making the
domain of ⊗x depend on π.
Similarly
!!h2: %2∀π V P.(true(Exist(V,P),π) ≡ ∃x.true(P,αα(V,x,π)))%1.
The meaning of ⊗The(V,P) is given by
!!h3: %2∀π V P x.(true(P,αα(V,x,π)) ∧ ∀y.(true(P,αα(V,y,π)) ⊃ y = x)
⊃ denotes(The(V,P),x,π))%1
and
!!h4: %2∀π V P.(¬∃!x.true(P,αα(V,x,π)) ⊃ ¬true Exists The(V,P))%1.
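Since the possible worlds here behave like the ⊗environments of computer science, the truth conditions (h1) and (h2) can be sketched directly: a world is a dictionary giving values to inner variables, and αα is functional update. The domain and the atomic propositions are assumptions of the sketch.

```python
# Sketch of (h1)-(h2): possible worlds as environments (dicts) giving
# values to inner variables; αα is the functional-update operation (h8).
DOMAIN = [1, 2, 3]

def alpha(V, x, pi):               # αα(V, x, π): π with V rebound to x
    pi2 = dict(pi)
    pi2[V] = x
    return pi2

def All(V, P): return ("All", V, P)
def Exist(V, P): return ("Exist", V, P)

def true_in(Q, pi):
    tag = Q[0]
    if tag == "All":               # (h1)
        return all(true_in(Q[2], alpha(Q[1], x, pi)) for x in DOMAIN)
    if tag == "Exist":             # (h2)
        return any(true_in(Q[2], alpha(Q[1], x, pi)) for x in DOMAIN)
    if tag == "Positive":          # sample atomic propositions on a variable
        return pi[Q[1]] > 0
    if tag == "IsTwo":
        return pi[Q[1]] == 2
    raise ValueError(Q)

XX = "XX"                          # an inner variable (a constant here)
pi0 = {XX: 1}                      # the actual world binds XX to 1
assert true_in(All(XX, ("Positive", XX)), pi0)
assert true_in(Exist(XX, ("IsTwo", XX)), pi0)
assert not true_in(All(XX, ("IsTwo", XX)), pi0)
```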
We also have the following "syntactic" rules governing
propositions involving quantification:
!!h10: %2∀π Q1 Q2 V.(absent(V,Q1) ∧ true(All(V,Q1 Implies Q2),π) ⊃
true(Q1 Implies All(V,Q2),π))%1
and
!!h11: %2∀π V Q X.(true(All(V,Q),π) ⊃ true(Subst(X,V,Q),π))%1,
where ⊗absent(V,X) means that the variable ⊗V is not present in
the concept ⊗X, and ⊗Subst(X,V,Y) is the concept that results
from substituting the concept ⊗X for the variable ⊗V in the
concept ⊗Y.
⊗absent and ⊗Subst are characterized by the following axioms:
!!h12: %2∀V1 V2.(absent(V1,V2) ≡ V1 ≠ V2)%1,
!!h13: %2∀V P X.(absent(V,Know(P,X)) ≡ absent(V,P) ∧ absent(V,X))%1,
axioms similar to ({eq h13}) for other conceptual functions,
!!h14: %2∀V Q.absent(V,All(V,Q))%1,
!!h15: %2∀V Q.absent(V,Exist(V,Q))%1,
!!h15b: %2∀V Q.absent(V,The(V,Q))%1,
!!h16: %2∀V X.Subst(V,V,X) = X%1,
!!h17: %2∀X V.Subst(X,V,V) = X%1,
!!h19: %2∀X V P Y.(Subst(X,V,Know(P,Y)) = Know(Subst(X,V,P),Subst(X,V,Y)))%1,
axioms similar to ({eq h19}) for other functions,
!!h20: %2∀X V Y.(absent(V,Y) ⊃ Subst(X,V,Y) = Y)%1,
!!h21: %2∀X V1 V2 Q.(V1 ≠ V2 ∧ absent(V2,X) ⊃ Subst(X,V1,All(V2,Q))
= All(V2,Subst(X,V1,Q)))%1,
and corresponding axioms to ({eq h21}) for ⊗Exist and ⊗The.
Along with these comes the axiom that binding kills variables, i.e.
!!h22: %2∀V1 V2 Q.(All(V1,Q) = All(V2,Subst(V2,V1,Q)))%1.
The functions ⊗absent and ⊗Subst play a "syntactic" role
in describing the rules of reasoning and don't appear in the
concepts themselves. It seems likely that this is harmless
until we want to form concepts of the laws of reasoning.
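The axioms (h12)-(h22) essentially specify ⊗absent and ⊗Subst as recursive functions on the free algebra of concepts. A sketch, with inner variables as strings and concepts as tagged tuples (the no-capture side condition of (h21) is assumed, not checked):

```python
# Sketch of (h12)-(h22): 'absent' and 'Subst' as recursive functions on
# concepts built as tagged tuples, with inner variables as strings.
def All(V, Q): return ("All", V, Q)
def Know(P, X): return ("Know", P, X)

BINDERS = {"All", "Exist", "The"}

def is_var(X):
    return isinstance(X, str)

def absent(V, X):
    if is_var(X):
        return V != X                       # (h12)
    if X[0] in BINDERS:
        if X[1] == V:
            return True                     # (h14)-(h15b): binding kills V
        return absent(V, X[2])
    return all(absent(V, part) for part in X[1:])   # cf. (h13)

def Subst(X, V, Y):
    if is_var(Y):
        return X if Y == V else Y           # (h17); (h20) for variables
    if Y[0] in BINDERS and Y[1] == V:
        return Y                            # V is bound: no substitution
    # (h19)/(h21): substitute in each part (assumes no capture).
    return (Y[0],) + tuple(Subst(X, V, p) for p in Y[1:])

PP, PP1, Mike = "PP", "PP1", ("Const", "Mike")
Q = Know(PP, ("Telephone", PP1))
assert not absent(PP, Q) and not absent(PP1, Q)
assert absent(PP, All(PP, Q))               # (h14)
assert Subst(Mike, PP, Q) == Know(Mike, ("Telephone", PP1))
assert Subst(Mike, PP, All(PP, Q)) == All(PP, Q)
```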
.skip 3
.BB EXAMPLES IN ARTIFICIAL INTELLIGENCE
The foregoing discussion of concepts has been mainly concerned
with how to translate into a suitable formal language certain sentences
of ordinary language. The success of the formalization is measured by the
extent to which the logical
consequences of these sentences in the formal system agree with
our intuitions of what these consequences should be.
Another goal of the formalization is to develop an idea of what concepts
really are, but the possible formalizations have not been explored enough
to draw even tentative conclusions about that.
For artificial intelligence, the study of concepts has yet a
different motivation. Our success in making computer programs
with %2general intelligence%1 has been extremely limited, and one
source of the limitation is our inability to formalize what the
world is like in general. We can try to separate the problem
of describing the general aspects of the world from the problem
of using such a description and the facts of a situation to discover
a strategy for achieving a goal. This is called separating the
⊗epistemological and the ⊗heuristic parts of the artificial intelligence
problem and is discussed in (McCarthy and Hayes 1969).
The epistemological part of the problem is very difficult in
general, so it is worthwhile to invent formalisms that are valid for
limited classes of problems. Therefore let us consider formalizing
the following fact about purposeful beings: %3Purposeful beings
do what they think will achieve their goals.%1
We attempt this formalization using our formalizations
of concepts and within what we call the %2quasi-static%1 model
of action. The quasi-static model assumes that actions are
discrete and happen one at a time. Each action changes the
situation into a new situation, and the next action by the same
or a different actor takes place in this new situation. Most
of the work in artificial intelligence so far has been within
the quasi-static model - usually without recognizing its
limitations.
The first problem we shall discuss in this framework
is extremely simple. We would like to deduce from the facts
that Pat knows Mike's telephone number and that he wants Joe
to know Mike's telephone number the conclusion that
Joe will come to know Mike's telephone number (because Pat
will tell it to him). All sorts of simplifications will
be made.
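As a preliminary illustration, the quasi-static model itself can be sketched
in a few lines: a situation is a snapshot, and each action maps the old
situation to a new one. The representation of situations and of ⊗telling
below is an arbitrary assumption of mine, chosen only to exhibit the
discrete, one-action-at-a-time structure.

```python
def result(action, situation):
    """Return the new situation produced by one discrete action.

    The old situation is left unchanged, as a quasi-static snapshot."""
    actor, act, content = action   # actor is unused by this one rule
    if act == 'tell':
        hearer, fact = content
        # copy the knowledge map so situations remain immutable snapshots
        knows = {k: set(v) for k, v in situation.get('knows', {}).items()}
        knows.setdefault(hearer, set()).add(fact)
        return {**situation, 'knows': knows}
    return situation

# Pat tells Joe Mike's telephone number, carrying s0 into a new situation s1.
s0 = {'knows': {'Pat': {'Telephone(Mike)'}}}
s1 = result(('Pat', 'tell', ('Joe', 'Telephone(Mike)')), s0)
```

After the action, Joe knows the number in %2s1%1 while %2s0%1 is unaffected;
a second action would then apply to %2s1%1 in the same way.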
****
A computer program with general intelligence must be able to
represent facts about what information it lacks and where and how it
is to be obtained. The example problem I have been considering is
that of representing what a traveler knows about the information that
airline clerks, travel agents, reservation computers, and airline
guides have relevant to a proposed trip.
This is still rather difficult, but the following considerations
have emerged:
.item←0
#. Unless we formalize ⊗knowing ⊗what, we add to our heuristic
difficulties, because the theorem prover or other reasoner has an extra
existential quantifier to deal with.
#. Similarly in treating belief we need something like
%2denot(Telephone Mike,Pat,s)%1 standing for what Pat believes Mike's
telephone number to be in the situation ⊗s.
Neither is formalized in the philosophical literature.
#. Modal logic offers difficulties, especially as we often
need multiple modalities like %2"believes he wants to know"%1
in a single sentence, and this makes the Kripke possible worlds
semantics for modal logic rather cumbersome.
Modal logic is especially troublesome if oblique contexts are
only a small part of the problem.
Moreover, it is preferable to be able to introduce new modalities
by introducing new predicates instead of having to change the logic.
#. For this reason, the most useful of the earlier treatments seemed
to involve making the argument of knowledge or belief a sentence
or term and weakening the Montague and Kaplan (1963) knowledge axioms
suitably to avoid their paradox. However, it is not easy to implement
a reasoning program that goes into quoted phrases.
Consider the following easier example:
Joe wants to know Mike's telephone number. He knows that
Pat knows it and that Pat likes Joe. We want the program to decide
on Joe's behalf to ask Pat for Mike's telephone number.
*****
This section will be completed with a set of axioms from
which, together with the premisses
%2true Want(Joe,Know(Joe,Telephone Mike)),
true K(Joe,Know(Pat,Telephone Mike))%1,
and
%2true K(Joe,Like(Pat,Joe))%1,
we will be able to deduce
%2true Future Know(Joe,Telephone Mike)%1
entirely within the FOL formalism for first order logic.
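Pending that axiomatization, the intended deduction can at least be
simulated. The toy program below is my illustration, not the promised axiom
set: the three premisses are encoded as tuples, and a single ask-and-tell
rule (one crude instance of "purposeful beings do what they think will
achieve their goals") yields the conclusion.

```python
TEL_MIKE = ('Telephone', 'Mike')

# The three premisses, each of the form ('true', concept).
facts = {
    ('true', ('Want', 'Joe', ('Know', 'Joe', TEL_MIKE))),
    ('true', ('K', 'Joe', ('Know', 'Pat', TEL_MIKE))),
    ('true', ('K', 'Joe', ('Like', 'Pat', 'Joe'))),
}

def derive(facts):
    """Apply the ask-and-tell rule once to every matching pattern:
    if x wants to know c, knows that y knows c, and knows y likes x,
    then in the future x knows c (x asks y, and y tells x)."""
    new = set(facts)
    for _, f in facts:
        if f[0] != 'Want' or f[2][0] != 'Know':
            continue
        asker, concept = f[1], f[2][2]
        for _, g in facts:
            if (g[0] == 'K' and g[1] == asker and g[2][0] == 'Know'
                    and g[2][2] == concept):
                helper = g[2][1]
                if ('true', ('K', asker, ('Like', helper, asker))) in facts:
                    new.add(('true', ('Future', ('Know', asker, concept))))
    return new

conclusion = ('true', ('Future', ('Know', 'Joe', TEL_MIKE)))
```

Running %2derive%1 on the premisses adds the conclusion that Joe will come
to know Mike's telephone number; a genuine axiomatization would of course
derive this rule rather than postulate it.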
.skip 3
.bb ABSTRACT LANGUAGES
The way we have treated concepts in this paper, especially
when we put variables in them, suggests trying to identify them with
terms in some language. It seems to me that this can be done
provided we use a suitable notion of ⊗abstract ⊗language.
Ordinarily a language is identified with a set of strings of
symbols taken from some alphabet. McCarthy (1962) introduces the idea of
%2abstract syntax%1, the idea being that it doesn't matter whether
sums are represented %2a+b%1 or %2+ab%1 or %2ab+%1 or by the integer
%22%5a%23%5b%1 or by the LISP S-expression (PLUS A B),
so long as there are predicates for deciding whether
an expression is a sum and functions for forming sums from summands
and functions for extracting the summands from the sum. In
particular, abstract syntax facilitates defining the semantics of
programming languages, and proving the properties of interpreters and
compilers. From that point of view, one can refrain from specifying
any concrete representation of the "expressions" of the language and
consider it merely a collection of abstract synthetic and analytic
functions and predicates for forming, discriminating and taking apart
%2abstract expressions%1. However, the languages considered at that
time always admitted representations as strings of symbols.
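For the case of sums, such a collection of synthetic and analytic functions
might be sketched as follows. The tuple representation inside ⊗mksum is
deliberately arbitrary: any representation satisfying the same equations
would serve equally well, which is the point of abstract syntax.

```python
def mksum(a, b):
    """Synthetic function: form the sum of a and b."""
    return ('PLUS', a, b)    # the S-expression-like choice is arbitrary

def issum(x):
    """Analytic predicate: is x a sum?"""
    return isinstance(x, tuple) and x[0] == 'PLUS'

def addend(x):
    """Selector: the first summand of a sum."""
    return x[1]

def augend(x):
    """Selector: the second summand of a sum."""
    return x[2]
```

A compiler or interpreter written against %2mksum%1, %2issum%1, %2addend%1
and %2augend%1 never depends on whether sums are written %2a+b%1, %2+ab%1,
or %2(PLUS A B)%1.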
If we consider concepts as a free algebra on basic concepts,
then we can regard them as strings of symbols on some alphabet if we
want to, assuming that we don't object to a non-denumerable alphabet
or infinitely long expressions if we want standard concepts for all
the real numbers. However, if we want to regard ⊗Equal(X,Y) and
⊗Equal(Y,X) as the same concept, and hence as the same "expression"
in our language, and we want to regard expressions related by
renaming bound variables as denoting the same concept, then
the algebra is no longer free, and regarding
concepts as strings of symbols becomes awkward even if possible.
It seems better to accept the notion of %2abstract language%1
defined by the collection of functions and predicates that form,
discriminate, and extract the parts of its "expressions".
In that case it would seem that
concepts can be identified with expressions in an abstract language.
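The identification of %2Equal(X,Y)%1 with %2Equal(Y,X)%1 can be illustrated
by comparing canonical forms rather than strings. The sketch below is one
arbitrary way of doing so (it handles only the symmetry of ⊗Equal and
ignores renaming of bound variables), and it shows why concept identity is
no longer string identity once the algebra stops being free.

```python
def canon(x):
    """Canonical form of a concept: Equal's arguments are sorted,
    so the two orders of its arguments yield one abstract expression."""
    if not isinstance(x, tuple):
        return x
    op, *args = x
    args = [canon(a) for a in args]
    if op == 'Equal':
        args = sorted(args, key=repr)   # Equal is symmetric
    return (op, *args)

def same_concept(x, y):
    """Identity of concepts is identity of canonical forms."""
    return canon(x) == canon(y)
```

Thus %2Equal(X,Y)%1 and %2Equal(Y,X)%1 are the same concept, while
%2Know(P,X)%1 and %2Know(X,P)%1 remain distinct.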
.skip 3
.bb PHILOSOPHICAL REMARKS
My motivation for introducing concepts as objects comes from
artificial intelligence. Namely, I want computer programs that can
reason intelligently about who wants what or who knows what.
This leads to considering examples like that of the previous section
and seems to have the following philosophical consequences:
.item←0
#. Since we can't immediately make programs capable of understanding
the whole world, we are interested in formalizations that
allow programs to act intelligently in limited domains.
#. We are not especially attached to the usages of natural
language except in so far as they suggest useful formalizations.
#. There is no harm in introducing lots of abstract entities
like concepts and no inclination to restrict ourselves to entities
that can be defined finitistically. This is because we aren't interested
in making our own knowledge more secure (as philosophers sometimes define
their task) but
rather want to make a computer program act effectively even at the cost
of having it reason naively. In designing such programs, we take for
granted our own common sense views of the world.
I must confess, however, to finding this attitude philosophically
attractive, i.e. first find a formal system that allows expressing common
sense reasoning - naively if necessary, and then try to make it
secure.
.SKIP 1
.bb NOTES
.PORTION NOTES
.RECEIVE;
.ITEM←NOTE;
.skip 1
.bb REFERENCES
%3Carnap, Rudolf%1 (1956), %2Meaning and Necessity%1, University of Chicago
Press.
%3Church, Alonzo%1 (1951a), The Need for Abstract Entities in Semantic Analysis,
in %2Contributions to the Analysis and Synthesis of Knowledge%1,
Proceedings of the American Academy of Arts and Sciences, %380%1, No. 1
(July 1951), 100-112. Reprinted in %2The Structure of Language%1, edited
by Jerry A. Fodor and Jerrold Katz, Prentice-Hall, 1964.
____________ (1951b), A formulation of the logic of sense and denotation.
In: P. Henle (ed.), %2Essays in honor of Henry Sheffer%1, pp. 3-24.
New York.
%3Frege, Gottlob%1 (1892), Über Sinn und Bedeutung. %2Zeitschrift für
Philosophie und Philosophische Kritik%1 100:25-50. Translated by
H. Feigl under the title "On Sense and Nominatum" in H. Feigl and
W. Sellars (eds.) %2Readings in Philosophical Analysis%1, New York
1949. Translated by M. Black under the title "On Sense and Reference"
in P. Geach and M. Black, %2Translations from the Philosophical
Writings of Gottlob Frege%1, Oxford, 1952.
%3Kaplan, David%1 (1969), Quantifying In, from %2Words and Objections:
Essays on the Work of W.V. Quine%1, edited by D. Davidson and J.
Hintikka, (Dordrecht-Holland: D. Reidel Publishing Co.), pp. 178-214.
Reprinted in (Linsky 1971).
%3Linsky, Leonard%1, ed.(1971) %2Reference and Modality%1, Oxford Readings in
Philosophy, Oxford University Press.
%3McCarthy, J. and Hayes, P.J.%1 (1969) Some Philosophical Problems from
the Standpoint of Artificial Intelligence. %2Machine Intelligence 4%1,
pp. 463-502 (eds Meltzer, B. and Michie, D.). Edinburgh: Edinburgh
University Press.
%3Montague, Richard%1 (1974), %2Formal Philosophy%1, Yale University Press.
%3Quine, W.V.O.%1 (1961), %2From a Logical Point of View%1, Harper and Row.
.SKIP 2
.begin verbatim
John McCarthy
Stanford Artificial Intelligence Laboratory
Stanford University
Stanford, California 94305
.end